Goto

Collaborating Authors

Graph Laplacian


Data driven estimation of Laplace-Beltrami operator

Neural Information Processing Systems

Approximations of Laplace-Beltrami operators on manifolds through graph Laplacians have become popular tools in data analysis and machine learning. These discretized operators usually depend on bandwidth parameters whose tuning remains a theoretical and practical problem. In this paper, we address this problem for the unnormalized graph Laplacian by establishing an oracle inequality that opens the door to a well-founded data-driven procedure for the bandwidth selection. Our approach relies on recent results by Lacour and Massart (2015) on the so-called Lepski's method.
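For illustration, the sketch below shows one common way to build an unnormalized graph Laplacian from a sampled point cloud with a Gaussian kernel of bandwidth h. The kernel choice, the omitted normalization constants, and the toy circle data are assumptions for the example; the paper's contribution is the data-driven selection of h (via a Lepski-type rule), not this construction itself.

```python
import numpy as np

def unnormalized_graph_laplacian(X, h):
    """Gaussian-kernel unnormalized graph Laplacian L = D - W for bandwidth h.

    X : (n, d) array of sample points; h : bandwidth.
    Assumption: Gaussian kernel; the paper's procedure selects h adaptively."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * h ** 2))
    np.fill_diagonal(W, 0.0)                  # no self-loops
    D = np.diag(W.sum(axis=1))                # degree matrix
    return D - W

# Toy usage: points sampled near a circle (a 1-D manifold in R^2);
# two bandwidths give two different discretized operators.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.01 * rng.normal(size=(200, 2))
L_small = unnormalized_graph_laplacian(X, h=0.05)
L_large = unnormalized_graph_laplacian(X, h=0.5)
```

The point of the example is that the resulting operator, and hence its spectrum, changes with h, which is exactly why a principled bandwidth-selection rule is needed.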





Digraph Inception Convolutional Networks

Neural Information Processing Systems

In this paper, we theoretically extend spectral-based graph convolution to digraphs and derive a simplified form using personalized PageRank. Specifically, we present the Digraph Inception Convolutional Networks (DiGCN), which utilizes digraph convolution and kth-order proximity to achieve larger receptive fields and learn multi-scale features in digraphs.
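As a rough illustration of the personalized-PageRank idea, the sketch below forms a generic PPR propagation matrix for a directed adjacency matrix and applies it to node features. This is not DiGCN's exact operator (which also involves a normalized digraph Laplacian and kth-order proximity); the teleport probability, the symmetrization step, and the toy graph are assumptions for the example.

```python
import numpy as np

def ppr_propagation(A, alpha=0.1):
    """Generic personalized-PageRank propagation matrix for a directed graph.

    A : (n, n) adjacency matrix of a digraph; alpha : teleport probability.
    Sketch only: DiGCN's actual digraph convolution is not reproduced here."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                                   # guard against sink nodes
    P = A / deg                                           # row-stochastic transition matrix
    Pi = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * P)  # PPR matrix
    return 0.5 * (Pi + Pi.T)                              # symmetrize for propagation

# One propagation step of node features H over a small directed cycle.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
H = np.eye(3)
H_prop = ppr_propagation(A) @ H
```

Because the PPR matrix is dense, every node aggregates information from all reachable nodes at once, which is the sense in which such operators enlarge the receptive field of a single convolution layer.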






Manifold limit for the training of shallow graph convolutional neural networks

Tengler, Johanna, Brune, Christoph, Iglesias, José A.

arXiv.org Machine Learning

We study the discrete-to-continuum consistency of the training of shallow graph convolutional neural networks (GCNNs) on proximity graphs of sampled point clouds under a manifold assumption. Graph convolution is defined spectrally via the graph Laplacian, whose low-frequency spectrum approximates that of the Laplace-Beltrami operator of the underlying smooth manifold, and shallow GCNNs of possibly infinite width are linear functionals on the space of measures on the parameter space. From this functional-analytic perspective, graph signals are seen as spatial discretizations of functions on the manifold, which leads to a natural notion of training data consistent across graph resolutions. To enable convergence results, the continuum parameter space is chosen as a weakly compact product of unit balls, with Sobolev regularity imposed on the output weight and bias, but not on the convolutional parameter. The corresponding discrete parameter spaces inherit the corresponding spectral decay, and are additionally restricted by a frequency cutoff adapted to the informative spectral window of the graph Laplacians. Under these assumptions, we prove $\Gamma$-convergence of regularized empirical risk minimization functionals and corresponding convergence of their global minimizers, in the sense of weak convergence of the parameter measures and uniform convergence of the functions over compact sets. This provides a formalization of mesh and sample independence for the training of such networks.
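To make the spectral definition of graph convolution concrete, the sketch below filters a graph signal using only the lowest K eigenpairs of a graph Laplacian, mirroring the frequency-cutoff idea (only the low-frequency graph spectrum approximates the Laplace-Beltrami spectrum). The filter coefficients, the cutoff K, and the toy path graph are assumptions for the example; this is not the paper's architecture or training procedure.

```python
import numpy as np

def spectral_graph_convolution(L, x, filter_coeffs, K):
    """Spectral convolution of a graph signal x using the lowest K eigenpairs of L.

    L : (n, n) graph Laplacian; x : (n,) signal; filter_coeffs : (K,) spectral multipliers.
    The cutoff K restricts the filter to the low-frequency (informative) spectral window."""
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenpairs in ascending order of frequency
    U = eigvecs[:, :K]                        # low-frequency eigenvectors
    x_hat = U.T @ x                           # truncated graph Fourier transform
    return U @ (filter_coeffs * x_hat)        # apply filter and transform back

# Toy usage on a path graph of 10 nodes.
n = 10
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
x = np.sin(np.linspace(0.0, np.pi, n))
y = spectral_graph_convolution(L, x, filter_coeffs=np.ones(4), K=4)
```

In this picture, a graph signal sampled from a function on the manifold is filtered through the graph's low-frequency eigenbasis, which is the sense in which the discrete convolution tracks its continuum (Laplace-Beltrami) counterpart as the sample size grows.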